METHOD FOR MANAGING A PARITY NODE CALCULATION UNIT, EQUIPMENT AND SOFTWARE FOR IMPLEMENTING THE METHOD
Patent abstract:
A method of managing a parity node calculation unit of an error correction code decoder capable of being represented by a bipartite graph comprising at least one parity node is provided, the parity node being configured to receive first and second input messages and to produce an output message, the elements of the input and output messages of the parity node comprising a symbol and a degree of reliability associated with the symbol, the first and second input messages comprising lists of elements ordered according to their degree of reliability. The method comprises: initializing a plurality of nbub FIFO memories with elements computed from combinations of elements of the first and second input messages, and determining the values of the output message iteratively.
Publication number: FR3016259A1
Application number: FR1450086
Filing date: 2014-01-07
Publication date: 2015-07-10
Inventors: Emmanuel Boutillon; Oussama Abassi; Laura Conde-Canencia
Applicants: Centre National de la Recherche Scientifique CNRS; Universite de Bretagne Sud
Main IPC class:
Patent description:
[0001] The present invention relates to a method for decoding non-binary LDPC codes, and to a decoder for implementing the method. Error correcting codes are very widely used in many fields of information technology, and particularly in the fields of information transmission between a transmitter and a receiver, and of the storage and reading of information on a storage medium.
[0002] LDPC codes are error-correcting codes of the category of linear block codes, whose parity matrix has the property of being sparse (in English, "low-density parity check matrix"), that is to say that it contains only a small number of non-zero elements compared to its total number of elements. They can indeed be characterized, like linear block codes, by a so-called parity matrix, generally denoted H. The parity matrix H is linked to a so-called code generation matrix, generally denoted G, by the relation G x Hᵀ = 0, where Hᵀ denotes the transpose of H. The M x N dimensions of the parity matrix correspond, for the number of rows M, to the number of parity constraints of the code, and for the number of columns N, to the length of the code words of the considered code. The rows of the parity matrix H of a linear block code respectively corresponding to parity constraints which are by construction respected by the codewords, the equality v x Hᵀ = 0 is verified for any code word v. As a reminder, a Galois field GF(q) is a finite set of q elements in which any non-zero element can be expressed as a power of a primitive element, denoted here α. In the following, the elements of the Galois field GF(q) will be denoted {0, α⁰, α¹, ..., α^(q-2)}. The LDPC codes whose codeword symbols belong to the binary Galois field (of order 2), denoted GF(2), are called binary, while the LDPC codes whose codeword symbols belong to a Galois field of order q strictly greater than 2, denoted GF(q), are said to be non-binary. Thus, the elements of a parity matrix of a non-binary LDPC code belong to a non-binary Galois field GF(q) (q > 2), and the matrix products of the above equations are computed using the addition and multiplication laws of the field GF(q), denoted respectively ⊕ and ⊗ in the following.
[0003] The construction of these codes can also be determined by a set of parity equations over GF(q), one such equation, involving dc codeword symbols, being written: Σ_k h(i,k) ⊗ x_k = 0 (the sum being taken with the addition ⊕ of GF(q)), where the h(i,k) are the non-zero values of the i-th row of H, and a codeword is designated by X = (x_1, x_2, ..., x_N), each symbol x_k, k = 1 ... N, being represented by m = log2(q) bits denoted x_k = (x_k,1, x_k,2, ..., x_k,m). An example of a parity matrix H in the Galois field GF(4), whose elements are noted {0; α⁰; α¹; α²}, of dimensions M = 3 and N = 6, is given below:

        | 0    α⁰   α⁰   0    α²   0  |
    H = | α¹   0    0    α⁰   0    α² |
        | α⁰   0    α²   0    0    α¹ |

Many algorithms for iterative decoding of LDPC codes use a representation of the parity matrix of the code by a bipartite graph called a "Tanner graph". For a parity matrix H of dimensions M x N, this representation connects, via branches, M nodes called "parity nodes" (in English "Check Nodes" or "CN") with N nodes called "variable nodes" (in English "Variable Nodes" or "VN").
Each non-zero element of the parity matrix is represented in the corresponding Tanner graph by a branch joining the parity node corresponding to the row of the element in the matrix H to the variable node corresponding to the column of the element in the matrix H. Each parity node of the graph thus represents a parity equation determined by the branches which connect it to the variable nodes. The variable nodes receive the words to be decoded or being decoded. FIG. 1 shows the Tanner graph corresponding to the example of the parity matrix of dimensions M = 3 and N = 6 given above. This graph comprises M = 3 parity nodes, noted in the figure CN1, CN2 and CN3, and N = 6 variable nodes, noted in the figure VN1, VN2, VN3, VN4, VN5 and VN6. The three parity equations corresponding respectively to the three parity nodes of the graph are:
α⁰ ⊗ VN2 ⊕ α⁰ ⊗ VN3 ⊕ α² ⊗ VN5 = 0 for the parity node CN1,
α¹ ⊗ VN1 ⊕ α⁰ ⊗ VN4 ⊕ α² ⊗ VN6 = 0 for the parity node CN2,
α⁰ ⊗ VN1 ⊕ α² ⊗ VN3 ⊕ α¹ ⊗ VN6 = 0 for the parity node CN3,
where the operators ⊕ and ⊗ respectively denote addition and multiplication in the Galois field GF(4). The Tanner graph representation can be exploited for the implementation of algorithms whose efficiency has been demonstrated on graph models, such as the so-called belief propagation algorithm (or "BP") or message-passing type algorithms. When applied to a bipartite graph with two different types of nodes, the BP algorithm relies on an iterative process of sending messages between nodes of each type connected by branches (so-called "neighbors"). Iterative LDPC decoding algorithms using the exchange of messages between the parity nodes and the variable nodes of the Tanner graph corresponding to the LDPC code considered have thus been developed. These decoding algorithms may more generally be implemented or adapted for the decoding of all linear block codes capable of being represented by a bipartite graph comprising a set of parity nodes and a set of variable nodes. In the case of soft-input decoding, the exchanged messages comprise possible values of the symbols of the processed words with which likelihood or reliability information is associated. For example, when the code considered is binary, and the symbols of the code words take values in the Galois field GF(2), that is to say binary values, the exchanged messages comprise two probability densities, one for the value "0" and the other for the value "1". The messages will include, for example, pairs of binary values to which likelihood (or reliability) values are respectively associated. When the code considered is non-binary, and the symbols of the code words take values in the Galois field GF(q), with q > 2, the exchanged messages may comprise q reliability values, each corresponding to an element of GF(q), which can be represented in the form of a vector of size q of pairs (symbol, reliability value). A parity node of a non-binary LDPC decoder of a code with values in the Galois field GF(q) thus receives input messages and generates output messages. Each input and output message contains q pairs of values, one representing a symbol and the other representing a reliability or likelihood associated with this symbol. If a direct implementation of the belief propagation (BP) decoding algorithm is used, an output message is constructed by selecting the q best combinations from among q to the power (dc - 1). This leads to a computation complexity of the order of O(q²).
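By way of illustration only (the following sketch is not part of the original disclosure, and the integer encoding of the GF(4) elements is an assumption made for the example), the parity equations of the Tanner graph above can be checked with a few lines of Python. The encoding maps 0 to the zero element and k = 1, 2, 3 to α⁰, α¹, α², with addition performed as an XOR of the 2-bit polynomial representation and multiplication performed on exponents modulo 3.

```python
from functools import reduce

def gf4_add(a, b):
    return a ^ b                         # addition = XOR of the 2-bit polynomial form

def gf4_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return ((a - 1) + (b - 1)) % 3 + 1   # multiply by adding exponents modulo 3

# Example parity matrix H over GF(4): 0 = zero element, k > 0 = alpha^(k-1).
H = [
    [0, 1, 1, 0, 3, 0],   # CN1: a0*VN2 + a0*VN3 + a2*VN5 = 0
    [2, 0, 0, 1, 0, 3],   # CN2: a1*VN1 + a0*VN4 + a2*VN6 = 0
    [1, 0, 3, 0, 0, 2],   # CN3: a0*VN1 + a2*VN3 + a1*VN6 = 0
]

def syndrome(word):
    """Evaluate the three parity equations for a word of 6 GF(4) symbols."""
    return [reduce(gf4_add, (gf4_mul(h, x) for h, x in zip(row, word)), 0)
            for row in H]

assert syndrome([0, 0, 0, 0, 0, 0]) == [0, 0, 0]   # the all-zero word is a codeword
assert syndrome([1, 1, 1, 1, 0, 1]) == [0, 0, 0]   # (a0, a0, a0, a0, 0, a0), solved by hand
```

The word used in the last assertion was obtained by solving the three parity equations above by hand, so it satisfies all of them.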
The BP decoding algorithm can also be considered in the frequency domain; this is referred to as the Fourier-transform-based BP algorithm. Moving to the frequency domain makes it possible to reduce the complexity of the BP algorithm down to the order of O(dc x q x log(q)). The fact remains that the implementation of the BP algorithm has a very high cost in terms of computational complexity, a cost that becomes prohibitive when considering values of q greater than 16. Different algorithms have been proposed to overcome this problem of high complexity, among which the algorithm called Extended Min-Sum (EMS), which proposes to use truncated messages by selecting the nm most reliable symbols, nm being chosen much lower than q (nm << q). The messages are then sorted before being introduced into the parity node. A parity node is composed of a combination of elementary parity nodes, where each elementary parity node receives two sorted messages, each containing nm pairs (symbol, reliability), from which it generates an output message containing the best nm combinations possible for the two input messages, the total number of combinations being equal to nm². The following article can be consulted for a detailed description of the EMS algorithm: "Decoding algorithms for nonbinary LDPC codes over GF(q)", D. Declercq and M. Fossorier, IEEE Trans. Commun., vol. 55, no. 4, pp. 633-643, Apr. 2007. The Bubble Check algorithm, and its improved version called "L-Bubble Check", which aim to further reduce the complexity of the elementary parity nodes while maintaining the same performance in terms of error rate, propose to reduce the search space for the nm best possible combinations of the two input messages. Reference can be made to the following articles for a detailed description of the Bubble Check and L-Bubble Check algorithms, respectively: "Bubble check: a simplified algorithm for elementary check node processing in extended min-sum non-binary LDPC decoders", E. Boutillon and L. Conde-Canencia, Electronics Letters, vol. 46, no. 9, pp. 633-634, 2010, and "Design of a GF(64)-LDPC Decoder Based on the EMS Algorithm", E. Boutillon, L. Conde-Canencia, A. Al Ghouwayel, IEEE Trans. on Circuits and Systems, vol. 60, no. 10, pp. 2644-2656, October 2013. Although the Bubble Check algorithm can reduce the amount of computation in the elementary parity nodes, the decoding complexity remains too high to allow truly efficient hardware and/or software implementations. An object of the present invention is to propose a method for managing a parity node computation unit of an error correction code decoder capable of being represented by a bipartite graph, as well as a parity node computation unit of an error correction code decoder capable of being represented by a bipartite graph. According to a first aspect, the invention relates to a method for managing a parity node computation unit of an error correction code decoder capable of being represented by a bipartite graph comprising at least one parity node, the parity node being configured to receive first (U) and second (V) input messages and to produce an output message (E), the elements of the input and output messages of the parity node comprising a symbol and a degree of reliability associated with the symbol, the first and second input messages comprising lists U(i) and V(j) of length nm of elements ordered according to their degree of reliability.
The proposed method comprises the initialization of a plurality of nbub FIFO-type memories with elements calculated from combinations, according to a calculation operation φ(U(i), V(j)), of elements of the first and second input messages, said calculation operation φ being such that if a ≤ b and c ≤ d, then φ(a, c) ≤ φ(b, d), and the iterative determination of the values of the output message. According to the proposed method, an iteration of order m, m being an integer greater than or equal to zero, comprises selecting, among the output elements of the FIFO-type memories, an element Ss(m) having the best degree of reliability, generating an element of the output message (E) from the selected element Ss(m), and updating the output index of the FIFO memory from which the selected element Ss(m) originates. The proposed method provides an algorithm that enables a more efficient implementation of an error correction code decoder. This implementation indeed allows a significant gain in terms of component area used for an implementation of the decoder on an FPGA-type component, and in terms of maximum clock frequency for the execution of the proposed method. In a particular implementation of the proposed method, nbub is equal to four, and the four FIFO-type memories are initialized with elements of 4 subsets S1, S2, S3 and S4, generated by combining the degrees of reliability respectively associated with the elements U(i) and V(j) of the input messages, as follows: S1 = [φ(U(0), V(l)); l ∈ {0, ..., nm - 1}], S2 = [φ(V(0), U(l)); l ∈ {1, ..., nm - 1}], S3 = [φ(U(1), V(l)); l ∈ {1, ..., nm/2 - 1}], and S4 = [φ(V(1), U(l)); l ∈ {2, ..., nm/2 - 1}]. Advantageously, the 4 FIFO-type memories are initialized according to their sizes, which are chosen respectively equal to: nm/2 - 1, nm/2 - 2, nm/2 - 2, and nm/2 - 3. In another particular implementation of the proposed method, the generation of an element of the output message (E) from the selected element Ss(m) is performed only if the output message E does not already contain said element. This makes it possible to avoid the generation of duplicates in the output message (E). In another particular implementation of the proposed method, the degree of reliability of each element is represented by a log likelihood ratio (LLR). In another particular implementation of the proposed method, the calculation operation φ is an addition. The proposed method may advantageously be applied to a decoder of non-binary LDPC codes. According to a second aspect, there is provided a parity node calculation unit of an error correction code decoder capable of being represented by a bipartite graph, the parity node being configured to receive first (U) and second (V) input messages and to produce an output message (E), the elements of the input and output messages of the parity node comprising a symbol and a degree of reliability associated with the symbol, the first and second input messages comprising lists U(i) and V(j) of length nm of elements ordered according to their degree of reliability. The proposed computing unit comprises a computer processor operatively coupled to memory means, the memory means being configured as a plurality of nbub FIFO-type memories, and an output message generator executing on the computer processor.
The output message generator is advantageously configured to initialize the plurality of nbub FIFO-type memories with elements calculated from combinations, according to a calculation operation φ(U(i), V(j)), of elements of the first and second input messages, said calculation operation φ being such that if a ≤ b and c ≤ d, then φ(a, c) ≤ φ(b, d), and to determine the values of the output message iteratively, an iteration of order m, where m is an integer greater than or equal to zero, comprising selecting, from the output elements of the FIFO-type memories, an element Ss(m) having the best degree of reliability, generating an element of the output message (E) from the selected element Ss(m), and updating the output index of the FIFO memory from which the selected element Ss(m) is derived.
[0004] In a particular embodiment of the computing unit, nbub is equal to four, and the four FIFO-type memories are initialized with elements of 4 subsets S1, S2, S3 and S4, generated by combining the degrees of reliability respectively associated with the elements U(i) and V(j) of the input messages, as follows: S1 = [φ(U(0), V(l)); l ∈ {0, ..., nm - 1}], S2 = [φ(V(0), U(l)); l ∈ {1, ..., nm - 1}], S3 = [φ(U(1), V(l)); l ∈ {1, ..., nm/2 - 1}], and S4 = [φ(V(1), U(l)); l ∈ {2, ..., nm/2 - 1}]. In another particular embodiment of the computation unit, the generation of an element of the output message (E) from the selected element Ss(m) is performed only if the output message E does not already contain said element. This makes it possible to avoid the generation of duplicates in the output message (E). In another particular embodiment of the computing unit, the degree of reliability of each element is represented by a log likelihood ratio (LLR). In another particular embodiment of the calculation unit, the calculation operation φ is an addition. According to another aspect, there is provided a computer program, loadable into a memory associated with a processor, and comprising portions of code for implementing the steps of the proposed method during the execution of said program by the processor, as well as a set of data representing, for example by compression or encoding, said computer program. Another aspect relates to a non-transitory storage medium for a computer-executable program, comprising a data set representing one or more programs, said one or more programs including instructions which, when said one or more programs are executed by a computer comprising a processing unit operatively coupled to memory means and an input/output interface module, cause the computer to manage a parity node computing unit according to the proposed methods. The proposed methods are particularly well suited, although not exclusively, to an elementary parity node of a non-binary LDPC error correction code decoder. They are however also suitable for any parity node of an error correction code decoder capable of being represented by a bipartite graph comprising at least one parity node. Other features and advantages of the present invention will become apparent from the following description of nonlimiting exemplary embodiments, with reference to the appended drawings, in which: FIG. 1 is a diagram illustrating a bipartite graph representing an error correction code on which the proposed methods can be implemented; Figure 2a shows a decoding system to which the invention can be applied; FIGS.
2b and 2c show a parity node of an error correction code decoder to which the invention can be applied; Figures 3a, 3b and 3c show an elementary parity node to which the invention can be applied; FIG. 4 is a table showing certain aspects of the calculations carried out within an elementary parity node according to the L-Bubble Check algorithm; Figs. 5a and 5b are tables showing certain aspects of the calculations performed within an elementary parity node according to a particular embodiment of the proposed method; FIGS. 6a and 6b are tables representing certain aspects of the calculations carried out within an elementary parity node according to a particular embodiment of the proposed method; Figure 7 is a diagram illustrating certain aspects of the implementation of the proposed method; FIG. 8 represents an architecture of an elementary parity node for implementing the proposed method; Figure 9 is a diagram illustrating certain aspects of the implementation of the proposed method.
[0005] In the following detailed description of embodiments of the invention, many specific details are presented to provide a more complete understanding. Nevertheless, those skilled in the art will appreciate that embodiments can be practiced without these specific details. In other cases, well-known features are not described in detail in order to avoid unnecessarily complicating the description. The invention is described hereinafter in the nonlimiting context of a parity node of an error correction code decoder. Figure 2a shows a decoding system (100) including a non-binary LDPC decoder (101) according to different embodiments. The system (100) further comprises an input-output interface (102), processing means (103) and memory means (104). A noisy version y' of the code word is received after transmission on a communication channel or after reading on a storage medium. The acquisition of the word to be decoded y' by the system (100) is effected by the interface (102). The acquisition phase of the data to be decoded is followed by a data processing phase, performed for example by a data processing unit 105, which controls the non-binary LDPC decoder 101. The data processing unit 105 may be a computer, a computer network, or any other apparatus comprising a processor, a memory, a data storage unit 104, and other associated hardware elements such as a network interface and a media reader for reading and writing a removable storage medium (not shown in the figure). The removable storage medium may be, for example, a compact disc (CD), a digital video/versatile disc (DVD), a flash disk, etc. The removable storage medium contains instructions which, when executed by the data processing unit 105, cause this unit 105 to perform the data acquisition and/or processing part of the exemplary embodiments of the proposed method described herein. The removable storage medium may further comprise one or more computer programs for implementing and executing the LDPC decoding on the decoder 101, and then storing the results of this decoding in the memory 104. Certain portions of the LDPC decoder may be stored as ordered sequences of instructions on a given instance of the removable storage medium, of a removable device or of the local data storage 104, to be loaded into memory for execution by one or more processors.
Specifically, software instructions or computer-readable program code for carrying out embodiments may be stored, temporarily or permanently, in whole or in part, on a non-transitory computer-readable medium such as a compact disc (CD), a local or remote storage device, a local or remote memory, a floppy disk, or any other computer-readable storage device. Although the LDPC decoder is described as a program residing in a memory, the LDPC decoder can be implemented in hardware form, as an application-specific integrated circuit (ASIC), or in the form of a combination of hardware and software elements, such as for example a software program intended to be loaded and executed on an FPGA (Field Programmable Gate Array) type component. FIG. 2b illustrates an exemplary architecture of the decoder (101), which comprises a parity node processor (110) connected to dc variable node processors VNi, i = 1 ... dc (of which only three, (111), (112) and (113), are shown in the figure) and dc memory banks, for example of RAM type, RAMi, i = 1 ... dc (of which only three, (114), (115) and (116), are shown in the figure). The parity node processor (110) receives in parallel input messages V2Ci, i = 1 ... dc, respectively from the memory banks RAMi, i = 1 ... dc. It generates at its output, after calculation, output messages C2Vi, i = 1 ... dc, respectively transmitted to the variable node processors VNi, i = 1 ... dc (111), (112) and (113). The variable node processors VNi, i = 1 ... dc (111), (112) and (113) then generate V2C messages which are recorded in the memory banks before being provided as input to the parity node processor. The parity node (110), which receives input messages V2Ci, i = 1 ... dc, and generates output messages C2Vi, i = 1 ... dc, is said to be of degree dc. As previously indicated, the messages processed by a parity node comprise pairs of information representing a possible symbol and a reliability value associated with that symbol. Since the operations necessary for calculating reliability values represented by probabilities, knowing the received code word, comprise a large number of multiplications, it is common to use a logarithmic representation of the reliability value of a symbol, which makes it possible to replace multiplications by additions. This notably makes it possible to optimize the implementations of the decoding on dedicated components (for example of the ASIC or FPGA type). For example, log likelihood ratios (LLR) can be defined, which correspond to the reliability information exchanged in the messages between the parity nodes and the variable nodes. The log likelihood ratio LLR is used to measure the reliability of the symbols while eliminating the multiplication operations in the decoder and providing better immunity against quantification errors. One definition of the LLR denominator uses the probability of a fixed reference symbol, for example the symbol α⁰. Thus, if β denotes the noisy symbol received at the output of a transmission channel, the definition of the LLR of a symbol αᵢ, i = 0, ..., q-1, is given by the equation below: LLR(αᵢ) = ln( Pr(αᵢ | β) / Pr(α⁰ | β) ), where Pr(αᵢ | β) is the conditional probability of αᵢ knowing β. The above definition can lead to negative LLR values, especially when the probability of αᵢ is less than that of α⁰. However, for an implementation of the proposed method on a fixed-point digital architecture, it is preferable to work with positive values of LLR.
A first solution for doing this is to use the symbol with the smallest reliability as the reference symbol. With that definition, the most reliable symbols are those with the highest LLR values. This leads to an increase in the number of bits required to represent the LLR values, which increases the size of the memories required by the decoder. From this point of view, it is more efficient to define the LLRs so that the most reliable symbols have the smallest LLRs (while keeping all LLR values positive). For this, we can consider the definition below, in which the absolute value is applied to the LLR values calculated using the most reliable symbol as the reference symbol: LLR(αᵢ) = | ln( Pr(αᵢ | β) / Pr(α_d | β) ) |, with α_d = arg max { Pr(αᵢ | β); i = 0, ..., q - 1 }. Thus the most reliable symbol is the one with the smallest LLR value, namely the value 0. This makes it possible, for example when the LLR values are coded on 5 bits, to ensure that the smallest LLR value is equal to the lower limit of the positive value range represented with 5 bits (from 0 to 31). Conversely, if we work with positive and negative values of LLR while coding these values on 5 bits, the LLR dynamics may be less well represented, since the extreme values of LLR have not necessarily been normalized to correspond to one of the bounds of the range represented with 5 bits (from -15 to 16 for example). Thus, in the Galois field GF(4), if the 4 reliabilities of the elements of GF(4) are respectively (0.2; 0.5; 0.39; 0.01), the first definition gives us: LLR1 = (ln(0.2/0.2), ln(0.5/0.2), ln(0.39/0.2), ln(0.01/0.2)) = (0; 0.9163; 0.6678; -2.9957). The second definition gives us: LLR2 = (ln(0.2/0.01), ln(0.5/0.01), ln(0.39/0.01), ln(0.01/0.01)) = (2.9957; 3.9120; 3.6636; 0) (we go from LLR1 to LLR2 by adding a constant offset of 2.9957). The third definition gives us: LLR3 = (|ln(0.2/0.5)|, |ln(0.5/0.5)|, |ln(0.39/0.5)|, |ln(0.01/0.5)|) = (0.9163; 0; 0.2485; 3.9120).
Data processing within the parity node processor (110) can be performed in accordance with the so-called forward-backward algorithm, in which a parity node of degree dc is decomposed into 3 x (dc - 2) nodes called elementary parity nodes (in English "Elementary Check Node" or "ECN"), each elementary parity node receiving as input two messages U and V and generating a message E. FIG. 2c shows an example of an architecture of a parity node processor (120) of degree dc equal to 6, comprising 12 elementary parity nodes (121-132). The parity node processor (120) receives in parallel 6 messages V2Ci, i = 1 ... 6, and generates 6 output messages C2Vi, i = 1 ... 6. Each elementary parity node performs a parity test on its input messages U and V to generate an output message representing the combinations of the input messages having the best likelihood. For example, in the case where the likelihood is represented by an LLR value as defined above, the output message E of an elementary parity node is obtained by the following relation: E(x) = min { U(x_u) + V(x_v) ; x_u ∈ GF(q), x_v ∈ GF(q), x_u ⊕ x_v = x } (Eq. 1). Figure 3a illustrates a parity node (150a) of a binary decoder. The parity node receives two input messages and generates an output message, which is the sum of the values of the input messages. For the sake of simplifying the disclosure, the likelihood values are not considered here.
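As a quick numerical cross-check of the three LLR conventions above (an illustrative sketch, not part of the original disclosure; the reliability vector is the example one, with the third value read as 0.39 so that it matches the quoted results), the following Python fragment reproduces LLR1, LLR2 and LLR3 with natural logarithms:

```python
import math

p = [0.2, 0.5, 0.39, 0.01]   # example reliabilities Pr(alpha_i | beta), i = 0..3

# 1) reference = fixed symbol alpha^0: signed LLRs, possibly negative
llr1 = [math.log(pi / p[0]) for pi in p]
# 2) reference = least reliable symbol: all positive, largest = most reliable
llr2 = [math.log(pi / min(p)) for pi in p]
# 3) reference = most reliable symbol, absolute value: all positive,
#    smallest (zero) = most reliable, which suits a fixed-point decoder
llr3 = [abs(math.log(pi / max(p))) for pi in p]

print([round(v, 4) for v in llr1])   # [0.0, 0.9163, 0.6678, -2.9957]
print([round(v, 4) for v in llr2])   # [2.9957, 3.912, 3.6636, 0.0]
print([round(v, 4) for v in llr3])   # [0.9163, 0.0, 0.2485, 3.912]
```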
The input messages (or variables), denoted U2 and V2, and the output message (or variable), denoted E2, have values in GF(2), illustrated in the figure by the pair of values {0, 1}. The two input combinations (0,0) and (1,1) produce the output value "0", in accordance with the usual law of addition in GF(2), while the two input combinations (0,1) and (1,0) produce the output value "1". FIG. 3b illustrates an elementary parity node (150b) of a non-binary decoder whose input and output data have values in the Galois field GF(64). The input variables, denoted U64 and V64, and the output variable, denoted E64, have values in a Galois field of order 64 (denoted GF(64)), illustrated in the figure by the set of values {0, 1, ..., 63}. Since the output variable has values in GF(64), there are 64 valid input combinations for each of the 64 output values determined according to the addition law chosen in GF(64). The foregoing may be generalized to an elementary parity node of a non-binary decoder whose two input variables Uq and Vq and output variable Eq have values in a Galois field of order q, denoted GF(q). For each of the q possible values of the output variable Eq determined according to the addition law chosen in GF(q), there are q combinations of the input variables Uq and Vq. Moreover, the operation performed within the elementary parity node is not limited to addition. FIG. 3c illustrates an elementary parity node (150c) whose two input variables Uq and Vq and output variable Eq have values in a Galois field of order q, denoted GF(q). The output variable is linked to the input values by a parity relation, denoted φ, so that Eq = φ(Uq, Vq).
[0006] With reference to FIGS. 2b and 2c, the parity node processor (110), (120) receives in parallel the input messages V2Ci, i = 1 ... dc, performs on these input variables a parity test corresponding to the equation Eq. 1 set forth above, and generates output messages C2Vi, i = 1 ... dc, respectively transmitted to the corresponding variable node processors VNi, i = 1 ... dc (111), (112) and (113).
[0007] As illustrated in FIG. 2c, the processing of the input messages within the processor (110) can be decomposed into three processing layers each comprising dc - 2 elementary parity nodes (121-132). Each elementary parity node (121-132) receives two messages U and V, each containing q pairs (symbol with values in GF(q), likelihood value associated with the symbol), and generates an output message E containing the best q combinations of values of the two input messages U and V, that is, those with the highest likelihood. In one embodiment, the message E is defined by the equation Eq. 1 above. This equation corresponds to the extraction of q minimum values in the matrix of sum values, denoted TE, defined by TE[i, j] = U[i] + V[j] for (i, j) ∈ {0, ..., q - 1}². The smallest sums of the log likelihood values make it possible to identify the components in GF(q) with the highest likelihood.
[0008] The EMS ("Extended Min-Sum") algorithm recommends limiting the search for these minimum values to the best nm symbols of the incoming messages of an elementary parity node in order to calculate the values of the output message, with nm << q. It is then possible to consider a representation of the input and output messages by vectors of size nm, noted respectively U[k], V[k] and E[k].
[0009] In one embodiment, the values of the output message can be obtained by selecting the nm smallest values among the nm² sums of the nm values of the first input message and the nm values of the second input message. This step of selecting nm values from those calculated from the values of the input messages can be illustrated by a table representing the matrix TE, of size nm x nm, whose elements TE[i, j] are determined by the sums of the values U[i] and V[j] of the input messages: TE[i, j] = U[i] + V[j] for (i, j) ∈ {0, ..., nm - 1}². In addition, the reliability values of the input messages U and V are ordered in decreasing order of reliability, the value U[0] (respectively V[0]) corresponding to the best reliability and the value U[nm - 1] (respectively V[nm - 1]) to the worst. When the reliability values correspond to log likelihood ratios LLR, these values are therefore ordered in ascending order, the lowest LLR value representing the best reliability. In this case, the EMS elementary parity node update algorithm proposes an iterative method of scanning the candidates of the matrix TE to select the nm smallest output values and thereby determine the values of the vector E[k] corresponding to the output message of the node. This EMS algorithm comprises a first initialization phase, which consists in introducing the values of the first column of TE into a sorter of size nm. The smallest value in the sorter is then computed and copied into the output vector E[k]. The value extracted from the sorter and copied into the output vector is then replaced by its right-hand neighbour in the matrix TE, which is introduced into the sorter. A new iteration of the process is then performed by computing the smallest value in the sorter. To illustrate the principles of the EMS algorithm on an example, consider two input vectors U64 and V64 of size nm = 8. It is thus assumed that, in a preliminary step, the 8 components having the best likelihood values have been selected from among the 64 components corresponding to the possible values of a symbol of each input message with values in GF(64). It is further assumed that these 8 best LLRs for U64 and V64 are given by: U64 = (0, 1, 2, 3, 3, 4, 5, 5) and V64 = (0, 0, 1, 2, 4, 4, 5, 6). The following table shows the 64 candidates determined from the vectors U64 and V64 (column headers: values of U64; row headers: values of V64):

          (0) (1) (2) (3) (3) (4) (5) (5)
    (0)    0   1   2   3   3   4   5   5
    (0)    0   1   2   3   3   4   5   5
    (1)    1   2   3   4   4   5   6   6
    (2)    2   3   4   5   5   6   7   7
    (4)    4   5   6   7   7   8   9   9
    (4)    4   5   6   7   7   8   9   9
    (5)    5   6   7   8   8   9  10  10
    (6)    6   7   8   9   9  10  11  11

The EMS algorithm for updating the elementary parity nodes is described below for the examples U64 and V64 given above, for which nm is chosen equal to 8.
[0010] The implementation of this algorithm can use a sorter, that is to say a set of nm memory registers in which are recorded nm candidates constituting a selection set S, from which a candidate is selected, extracted from the sorter and then replaced by another candidate at each iterative cycle of the algorithm.
[0011] The initialization phase consists, as indicated above, in selecting the values of the first column of TE, as shown in the table below:

          (0) (1) (2) (3) (3) (4) (5) (5)
    (0)    0   1   2   3   3   4   5   5
    (0)    0   1   2   3   3   4   5   5
    (1)    1   2   3   4   4   5   6   6
    (2)    2   3   4   5   5   6   7   7
    (4)    4   5   6   7   7   8   9   9
    (4)    4   5   6   7   7   8   9   9
    (5)    5   6   7   8   8   9  10  10
    (6)    6   7   8   9   9  10  11  11

The initial elements of the selection set S of size nm are thus the elements {TE(0, j) | j = 0, ..., nm - 1}. In a variant making it possible to reduce the necessary calculations, the TE values are not computed beforehand, and this initialization phase comprises the determination of each of the elements {TE(0, j) | j = 0, ..., nm - 1} and their storage in the sorter; it consumes nm computation cycles. At the end of this initialization phase, the nm memory registers of the sorter that contain the selection values therefore contain the computed values of the first column of TE: Sorter = 0 0 1 2 4 4 5 6. A first computation loop iteration is then initiated by extracting the smallest element from the set S, and then replacing it in this selection set by its right-hand neighbour in the matrix TE, that is to say the next element of the row on which the extracted element is located. In the example considered, the element TE(0,0) is thus extracted for this first iteration and then replaced by the element TE(1,0), which is computed if this computation has not already been done. The set S is thus updated and becomes S = {TE(1,0); TE(0, j) for j = 1, ..., nm - 1} = {1; 0; 1; 2; 4; 4; 5; 6}. In addition, E[0] = TE(0,0). The algorithm loops and performs a second extraction and replacement iteration in the set S. The extracted element is TE(0,1), which is replaced by TE(1,1). S is updated and becomes {TE(1,0); TE(1,1); TE(0, j) for j = 2, ..., nm - 1} = {1; 1; 1; 2; 4; 4; 5; 6}, and E[1] = TE(0,1).
[0012] During a third extraction and replacement iteration in the set S, the element TE(1,0) is extracted from S and replaced by TE(2,0). S becomes {TE(2,0); TE(1,1); TE(0, j) for j = 2, ..., nm - 1} = {2; 1; 1; 2; 4; 4; 5; 6}, and E[2] = TE(1,0). During a fourth iteration, the element TE(1,1) is extracted from S and replaced by TE(2,1). S becomes {TE(2,0); TE(2,1); TE(0, j) for j = 2, ..., nm - 1} = {2; 2; 1; 2; 4; 4; 5; 6}, and E[3] = TE(1,1). During a fifth iteration, the element TE(0,2) is extracted from S and replaced by TE(1,2). S becomes {TE(2,0); TE(2,1); TE(1,2); TE(0, j) for j = 3, ..., nm - 1} = {2; 2; 2; 2; 4; 4; 5; 6}, and E[4] = TE(0,2). During a sixth iteration, the element TE(2,0) is extracted from S and replaced by TE(3,0). S becomes {TE(3,0); TE(2,1); TE(1,2); TE(0, j) for j = 3, ..., nm - 1} = {3; 2; 2; 2; 4; 4; 5; 6}, and E[5] = TE(2,0). During a seventh iteration, the element TE(2,1) is extracted from S and replaced by TE(3,1). S becomes {TE(3,0); TE(3,1); TE(1,2); TE(0, j) for j = 3, ..., nm - 1} = {3; 3; 2; 2; 4; 4; 5; 6}, and E[6] = TE(2,1). The search ends with the extraction of an nm-th value from the set S, the extracted value being chosen, as above, as the smallest value of the set S. The element TE(1,2) is thus extracted from S, and E[7] = TE(1,2), that is E = (TE(0,0); TE(0,1); TE(1,0); TE(1,1); TE(0,2); TE(2,0); TE(2,1); TE(1,2)) = (0; 0; 1; 1; 1; 2; 2; 2). The result vector of size nm is thus determined after nm iterations of the search cycle and 2 x nm computation cycles. The Bubble Check algorithm reduces the complexity of the elementary parity node computations compared to the EMS algorithm. It exploits the following properties of the elements of the matrix TE, in the case where the input lists are ordered by increasing LLR value: for all j ∈ [1, nm] and i < i', TE(i, j) ≤ TE(i', j), and for all i ∈ [1, nm] and j < j', TE(i, j) ≤ TE(i, j'), and draws from them the following three main consequences:
- it is possible to exclude, a priori, a subset of the elements of the matrix TE, which amounts to excluding a part of the exploration zone of this matrix;
- the size of the selection set S can be reduced, for example by a factor of 2;
- predefined selection paths for the substitutes of each element of the initial set S (called "bubbles") can be determined.
The L-Bubble Check algorithm enhances the Bubble Check algorithm by defining four "bubble" paths limited to the first two rows and first two columns of the matrix, as shown in FIG. 4. This makes it possible to limit the search space for the minimums to 3 x nm combinations.
[0013] To resume the previous example, we consider a selection set S' of size nm' = nm/2 = 4. The subset Excl of elements of the matrix TE defined by Excl = {TE(i, j) for (i, j) ∈ {2, ..., nm - 1}²} is excluded from the exploration zone of the matrix TE, that is to say that it is determined in advance that no element of this area will be a candidate that can be integrated into the selection set S'. The initialization phase of the EMS algorithm is resumed, but with a selection set twice as small: a selection set S' of size nm' is formed by calculating the elements {TE(i, 0) | i = 0, ..., nm' - 1}. The initialization phase thus consists, as indicated above, in selecting the first nm' values of the first column of TE, as shown in the table below:

          (0) (1) (2) (3) (3) (4) (5) (5)
    (0)    0   1   2   3   3   4   5   5
    (0)    0   1   2   3   3   4   5   5
    (1)    1   2   3   4   4   5   6   6
    (2)    2   3   4   5   5   6   7   7
    (4)    4   5   6   7   7   8   9   9
    (4)    4   5   6   7   7   8   9   9
    (5)    5   6   7   8   8   9  10  10
    (6)    6   7   8   9   9  10  11  11

These nm' values constitute the initial "bubbles" of the L-Bubble Check algorithm. In the example above, the set S' is thus initialized to S' = {0; 0; 1; 2}. The subset Excl of elements of the matrix TE defined by Excl = {TE(i, j) for (i, j) ∈ {2, ..., nm - 1}²} (204) is excluded from the exploration zone of the matrix TE, and nm' = 4 predefined paths (200, 201, 202, 203) of replacement after extraction of the elements of S' are defined, the first element of each predefined path corresponding respectively to the 4 elements of the initial set S'. With reference to FIG. 4, the first element TE(0,0) of the initial set S' is also the first element of a horizontal path (200) defined by the subset P0 = {TE(0, j) for j = {0, ..., nm - 1}}; the second element TE(1,0) of the initial set S' is also the first element of a horizontal path (201) defined by the subset P1 = {TE(1, j) for j = {0, ..., nm - 1}}; the third element TE(2,0) of the initial set S' is also the first element of a path (202) defined by the subset P2 = {TE(2, 0)} ∪ {TE(i, 1) for i = {2, ..., nm - 1}}; and the fourth element TE(3,0) of the initial set S' is also the first element of a vertical path (203) defined by the subset P3 = {TE(i, 0) for i = {nm/2 - 1, ..., nm - 1}}. Note that the four paths (200, 201, 202, 203) shown in FIG. 4 are defined within the non-excluded area of the matrix, the excluded area (204) corresponding to the subset Excl = {TE(i, j) for (i, j) ∈ {2, ..., nm - 1}²}. A first computation loop iteration is then engaged by extracting the smallest element of the set S', and then replacing it in this selection set by the element immediately following the extracted element in the path of the matrix TE corresponding to the extracted element, for example by calculating the address of the replacement element according to the predefined path corresponding to the element extracted from S', calculating the value of the replacement element according to its address, and then updating the set S' with the replacement element. In the example under consideration, the element TE(0,0) is thus extracted for this first iteration; the address of the substitute element is calculated according to the path P0 = {TE(0, j) for j = {0, ..., nm - 1}}, which leads to the address of the element TE(0,1), which is calculated. The set S' is then updated and becomes S' = {TE(0,1); TE(i, 0) for i = 1, ..., nm' - 1} = {1; 0; 1; 2}. We also have C[0] = TE(0,0), that is C = (0; x; x; x; x; x; x; x). The algorithm loops and performs a second extraction and replacement iteration in the set S'. The smallest element is extracted from the set S'; in the example considered, the extracted element is TE(1,0). The path corresponding to this element is P1 = {TE(1, j) for j = {0, ..., nm - 1}}; the address of the neighbouring element of the extracted element TE(1,0) is thus calculated according to this path, which leads to the element TE(1,1), which is calculated. S' is updated with the calculated value and becomes {TE(0,1); TE(1,1); TE(i, 0) for i = 2, ..., nm' - 1} = {1; 1; 1; 2}. In addition, C[1] = TE(1,0), that is C = (0; 0; x; x; x; x; x; x). During a third extraction and replacement iteration in the set S', the element TE(0,1) is extracted from S'. The path corresponding to this element is P0; the address of the neighbouring element is calculated according to this path, which leads to the element TE(0,2), which is calculated. S' becomes {TE(0,2); TE(1,1); TE(i, 0) for i = 2, ..., nm' - 1} = {2; 1; 1; 2}. In addition, C[2] = TE(0,1), that is C = (0; 0; 1; x; x; x; x; x). During a fourth extraction and replacement iteration in the set S', the element TE(1,1) is extracted from S'. The path corresponding to this element is P1; the neighbouring element along this path is TE(1,2), which is calculated. S' becomes {TE(0,2); TE(1,2); TE(i, 0) for i = 2, ..., nm' - 1} = {2; 2; 1; 2}. In addition, C[3] = TE(1,1), that is C = (0; 0; 1; 1; x; x; x; x). During a fifth extraction and replacement iteration in the set S', the element TE(2,0) is extracted from S'. The path corresponding to this element is P2 = {TE(2, 0)} ∪ {TE(i, 1) for i = {2, ..., nm - 1}}; the address of the neighbouring element of the extracted element TE(2,0) is thus calculated according to this path, which leads to the element TE(2,1), which is calculated. S' becomes {TE(0,2); TE(1,2); TE(2,1); TE(i, 0) for i = 3, ..., nm' - 1} = {2; 2; 2; 2}. In addition, C[4] = TE(2,0), that is C = (0; 0; 1; 1; 1; x; x; x). During a sixth extraction and replacement iteration in the set S', the element TE(0,2) is extracted from S'. The path corresponding to this element is again P0; the neighbouring element is TE(0,3), which is calculated. S' becomes {TE(0,3); TE(1,2); TE(2,1); TE(i, 0) for i = 3, ..., nm' - 1} = {3; 2; 2; 2}. In addition, C[5] = TE(0,2), that is C = (0; 0; 1; 1; 1; 2; x; x).
[0014] During a seventh extraction and replacement iteration in the set S', the element TE(1,2) is extracted from S'. The path corresponding to this element is again P1; the neighbouring element is TE(1,3), which is calculated. S' becomes {TE(0,3); TE(1,3); TE(2,1); TE(i, 0) for i = 3, ..., nm' - 1} = {3; 3; 2; 2}. In addition, C[6] = TE(1,2), that is C = (0; 0; 1; 1; 1; 2; 2; x). The search ends with the extraction of an nm-th value in the set S', the extracted value being chosen, as before, as the smallest value of the set S'. The element TE(2,1) is thus extracted from S', and we have C[7] = TE(2,1), that is C = (TE(0,0); TE(1,0); TE(0,1); TE(1,1); TE(2,0); TE(0,2); TE(1,2); TE(2,1)) = (0; 0; 1; 1; 1; 2; 2; 2). The result vector of size nm is thus determined after nm iterations of the search cycle and 2 x nm computation cycles. FIG. 5a illustrates an example of implementation of the proposed algorithm on the same matrix as that of FIG. 4.
[0015] In one embodiment, the proposed method defines an exclusion zone for the exploration of the matrix TE, a selection set S'', and an exploration path of the matrix for each of the elements of S''. We consider the case of an elementary parity node of a non-binary LDPC decoder, which receives two messages U and V as input and generates a message E as output. It is assumed that the non-zero values of the parity matrix of the LDPC decoder belong to the Galois field GF(q), and that each input message contains nm pairs (symbol, reliability). As described above, the reliability associated with a symbol in a message is represented by a log likelihood ratio LLR with positive values. The elements of each input message are further ordered according to their reliability values, for example in ascending order of LLR, the first element corresponding to the most likely symbol. There are thus two input messages U and V, with U = [U(0), ..., U(nm - 1)] and V = [V(0), ..., V(nm - 1)], where the U(i) and V(j) are pairs U(i) = (U_GF(i), U_LLR(i)) and V(j) = (V_GF(j), V_LLR(j)), with U_GF(i) ∈ GF(q), V_GF(j) ∈ GF(q), and U_LLR(i) and V_LLR(j) the normalized LLRs associated with the symbols U_GF(i) and V_GF(j), respectively. The values U_LLR and V_LLR satisfy: U_LLR(0) = 0 and U_LLR(i) ≥ 0 for i = 1, ..., nm - 1, and V_LLR(0) = 0 and V_LLR(j) ≥ 0 for j = 1, ..., nm - 1. In this embodiment, the parameter nm is chosen lower than the order q of the Galois field considered. It will preferably be chosen much lower than this order (nm << q), in particular to satisfy the complexity constraints of the decoding calculations. The search for values satisfying the parity equation Eq. 1 stated above is limited to the best nm symbols of the incoming messages, to generate an output message in the form of a vector E of size nm. Here again a selection set, denoted S, is considered, of size at least nm/2 in the case where nm is even, and at least (nm + 1)/2 in the opposite case. For example, it is possible to choose nm even (for example equal to 8 when q is equal to 64), and the size of S equal to nm/2, to minimize the memory occupancy of the selection set S. The subset Excl of elements of the matrix TE defined by Excl = {TE(i, j) for (i, j) ∈ {2, ..., nm - 1}²} is excluded from the exploration zone of the matrix TE, that is to say that it is determined beforehand that none of the elements of this set will be a candidate that can be integrated into the selection set S. The definition of this exclusion zone makes it possible to make the paths defined for the elements of the selection set as short as possible, since these paths are defined outside this exclusion zone. These paths must cover each of the elements of the matrix TE that does not belong to the excluded subset Excl. In this embodiment, the initial elements of the selection set are chosen such that unidirectional paths can be defined for each of them. In this way, it is ensured that the elements of each path are already sorted (either in the horizontal direction or in the vertical direction), whereas this was not the case in the previous example of the L-Bubble Check. A partition of the exploration zone of the matrix TE is thus defined into four subspaces S1, S2, S3 and S4, as illustrated in FIG. 5b. Each of these subspaces corresponds to a path illustrated in FIG. 5a. It can be seen that for 4 bubbles there are two horizontal paths (210, 211) and two vertical paths (212, 213). Since the LLR values of the input messages are already ordered (in ascending order), the elements of these paths, which are either purely horizontal or purely vertical, are also ordered. Four subsets (Sk), k = 1, ..., 4, of elements of the matrix TE are thus defined, which respectively correspond to an exploration path of this matrix. FIG. 5b illustrates these four subsets in the case of a square matrix TE of size 12. In one embodiment, the four subsets (Sk), k = 1, ..., 4, are defined by the following equations:
S1 = [U(0) + V(l); l ∈ {0, ..., nm - 1}]
S2 = [V(0) + U(l); l ∈ {1, ..., nm - 1}]
S3 = [U(1) + V(l); l ∈ {1, ..., nm - 1}]
S4 = [V(1) + U(l); l ∈ {2, ..., nm - 1}]
The four exploration paths of the matrix TE then cover 4 x (nm - 1) positions of the matrix.
[0016] In another embodiment of the proposed method, illustrated by FIGS. 6a and 6b, the four subsets (Sk), k = 1, ..., 4, with which the "bubble" paths are respectively associated, are defined by the following equations:
S1 = [U(0) + V(l); l ∈ {0, ..., nm - 1}]
S2 = [V(0) + U(l); l ∈ {1, ..., nm - 1}]
S3 = [U(1) + V(l); l ∈ {1, ..., nm/2 - 1}]
S4 = [V(1) + U(l); l ∈ {2, ..., nm/2 - 1}]
The four exploration paths of the matrix TE then cover 3 x nm - 4 positions of the matrix, so that the exploration space is even smaller. FIG. 7 illustrates an exploration algorithm (300) of the matrix TE according to a particular embodiment. The candidates to be selected after comparison, that is to say, to use the terminology employed above, the elements of the selection set, are denoted below c1, c2, c3 and c4.
[0017] The values of the elements of the selection set are initialized (301) to the initial values of the subsets (Sk), k = 1, ..., 4, which correspond to the initial elements of the paths they respectively define: c1 = S1(0), c2 = S2(0), c3 = S3(0), c4 = S4(0). A set of values E corresponding to the output message is initialized with predetermined values. A loop index k is also initialized, for example to the value 1: k = 1. An iteration of the algorithm is then performed, with the selection (303) of a value, denoted c_s(k), from among the values of c1, c2, c3 and c4. As previously explained, the elements c1, c2, c3 and c4 may be pairs (symbol, reliability), in which case the selection is made on the basis of a reliability value criterion. In other words, selecting c_s(k) amounts to selecting the most reliable pair among the pairs of the selection set, in this example among c1, c2, c3 and c4. Different representations of the reliability can be adopted, among which the log likelihood ratios LLR as defined above. In this case, the selection of c_s(k) can consist in selecting the ci associated with the smallest LLR value. As a variant, the elements c1, c2, c3 and c4 manipulated by the algorithm may directly correspond to a reliability value, for example an LLR value. The output message E is then updated (304) with the selected value: E = E ∪ {c_s(k)}. The selection set is then updated (305) by replacing the value of the selected element ci by the next value in the corresponding subset Si. For example, if the selected pair c_s(k) corresponded to the element Ss(m) of the corresponding subset (i.e. of the path) Ss, this element is replaced in the selection set by the value or the pair of values corresponding to Ss(m + 1). The iteration loop index k is then compared (306) with the value nm in order to determine whether further iterations are to be performed. In the case where fewer than nm iterations have been performed, the loop index k is incremented (308) and a new iteration is initiated. Otherwise, the algorithm ends (307). In another embodiment (not shown in FIG. 7), the selection (303) of the best reliability value, or of the pair with the best reliability c_s(k), includes the verification that the candidate to be selected from among the {ci} is not already in the set of previously selected values or pairs. If c_s(k) is already in the set E, it is not selected again and is discarded by being marked as an invalid candidate. Marking a candidate with an invalid status can be done in a variety of ways. For example, when the algorithm manipulates pairs (symbol, reliability represented by an LLR), the LLR value of the pair that one wishes to mark as invalid can be forced to a very low value, for example close to or equal to zero. Forcing the LLR value to 0 ensures that the algorithm generates a sorted output. Otherwise, since an elementary parity node is capable of generating invalid pairs, another elementary parity node (for example, with reference to FIG. 2c, a node of a lower layer) receiving these pairs will compare valid pairs with invalid pairs. In order to simplify the processing, the invalid pairs are considered valid for performing the comparison of the algorithm. For example, if, during an iteration of the given algorithm, the values respectively corresponding to the elements S1(m1), S2(m2), S3(m3) and S4(m4) of the subsets are compared, in which S3(m3) is an invalid pair, and the pair S2(m2) has the best reliability (for example the smallest LLR value) among the four candidates S1(m1), S2(m2), S3(m3) and S4(m4), the ECN node will select and include in the output set the pair S2(m2). However, it is possible that the pair S3(m3 + 1) is valid (in that it has not yet been selected for inclusion in the output set). If the LLR value of the pair S3(m3) is not forced to zero, we cannot be sure that the reliability of S3(m3 + 1) is worse (and therefore greater in LLR value) than that of S2(m2). Thus, forcing the reliability of S3(m3) to 0 makes it possible to rule the pair S3(m3) out of the comparison to be made, so as to compare S1(m1), S2(m2), S3(m3 + 1) and S4(m4) instead of S1(m1), S2(m2), S3(m3) and S4(m4). In another embodiment (not shown in FIG. 7), the update (305) of the selected pair or value is also accompanied by the update, in the same way as for the selection, of any pair or value declared invalid, in order to minimize the presence of invalid pairs in the output message of the parity node, even though the invalid pair is not the one that was selected during the comparison of the current iteration. FIG. 8 shows an example of an elementary parity node architecture for implementing the proposed exploration algorithms. This architecture comprises four adders (403a1, 403a2, 403b1, 403b2), four registers (406a, 406b, 406c, 406d), four FIFO-type memories (405a, 405b, 405c, 405d) and a comparison module (404). The elements of the input messages U and V are recorded in memory means (401, 402), for example of the RAM type. The elements of each input message U and V are sorted according to their reliability, in order of decreasing reliability. In the example represented in FIG. 8, reliability values expressed by log likelihood ratios LLR are considered, which are therefore ordered in ascending order, the smallest LLR value representing the greatest reliability. The input messages each include nm elements, each element comprising a pair (symbol, LLR), ordered from the element having the smallest LLR value (the elements U(0) and V(0), respectively) to the element having the largest LLR value (the elements U(nm - 1) and V(nm - 1), respectively). The registers (406a, 406b, 406c, 406d) store the elements U(0), U(1), V(0) and V(1), respectively. The elementary parity node (400) includes an adder module (403a, 403b) configured to compute, on the one hand, the sums (403a1, 403a2) (in the Galois field GF(q) considered) of the element V(j') coming from the memory V (402) with, respectively, the element U(0) recorded in the first register (406a) and the element U(1) recorded in the second register (406b), and, on the other hand, the sums (403b1, 403b2) (in the Galois field GF(q) considered) of the element U(i') coming from the memory U (401) with, respectively, the element V(0) recorded in the third register (406c) and the element V(1) recorded in the fourth register (406d). This adder module (403a, 403b) thus makes it possible to generate the elements of the four subsets (Sk), k = 1, ..., 4, defined above.
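A possible software model of the duplicate-suppression variant discussed above is sketched below (all names and the GF symbol values are illustrative assumptions, not taken from the description): message elements are (symbol, LLR) pairs, the four candidate paths are built from U(0), U(1), V(0) and V(1) as the adder module does, symbols are combined by XOR as in a field of characteristic 2, and a candidate whose symbol is already present in the output list is discarded instead of being emitted a second time.

```python
def ecn_with_duplicate_suppression(U, V, nm):
    """U, V: lists of nm (symbol, llr) pairs sorted by increasing LLR."""
    # Four unidirectional candidate paths, as in the four-bubble construction.
    paths = [
        [(U[0][0] ^ V[l][0], U[0][1] + V[l][1]) for l in range(0, nm)],
        [(V[0][0] ^ U[l][0], V[0][1] + U[l][1]) for l in range(1, nm)],
        [(U[1][0] ^ V[l][0], U[1][1] + V[l][1]) for l in range(1, nm)],
        [(V[1][0] ^ U[l][0], V[1][1] + U[l][1]) for l in range(2, nm)],
    ]
    heads = [0, 0, 0, 0]              # output index (read pointer) of each path
    E, seen = [], set()
    while len(E) < nm:
        live = [k for k in range(4) if heads[k] < len(paths[k])]
        if not live:                  # all paths exhausted
            break
        best = min(live, key=lambda k: paths[k][heads[k]][1])
        sym, llr = paths[best][heads[best]]
        heads[best] += 1              # replacement step: just advance the pointer
        if sym not in seen:           # duplicate symbols are discarded
            seen.add(sym)
            E.append((sym, llr))
    return E

# Hypothetical GF(8) symbols combined with the example LLRs used earlier.
U = list(zip([0, 3, 5, 6, 1, 2, 7, 4], [0, 1, 2, 3, 3, 4, 5, 5]))
V = list(zip([0, 6, 2, 7, 5, 1, 3, 4], [0, 0, 1, 2, 4, 4, 5, 6]))
print(ecn_with_duplicate_suppression(U, V, 8))
```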
FIG. 8 shows an example of an elementary parity node architecture for implementing the proposed exploration algorithms. This architecture comprises four adders (403a1, 403a2, 403b1, 403b2), four registers (406a, 406b, 406c, 406d), four FIFO-type memories (405a, 405b, 405c, 405d) and a comparison module (404). The elements of the input messages U and V are stored in memory means (401, 402), for example of the RAM type, and are sorted in order of decreasing reliability. In the example shown in FIG. 8, reliability values expressed by log-likelihood ratios (LLR) are considered; they are therefore ordered in ascending order, the smallest LLR value representing the greatest reliability. The input messages each comprise nm elements, each element being a pair (symbol, LLR), ordered from the element having the smallest LLR value (the elements U(0) and V(0), respectively) to the element having the largest LLR value (the elements U(nm - 1) and V(nm - 1), respectively). The registers (406a, 406b, 406c, 406d) store the elements U(0), U(1), V(0) and V(1), respectively. The elementary parity node (400) includes an adder module (403a, 403b) configured to compute, on the one hand, the sums (403a1, 403a2), in the Galois field GF(q) considered, of the element V(j') read from the memory V (402) with, respectively, the element U(0) stored in the first register (406a) and the element U(1) stored in the second register (406b), and, on the other hand, the sums (403b1, 403b2), in the Galois field GF(q) considered, of the element U(i') read from the memory U (401) with, respectively, the element V(0) stored in the third register (406c) and the element V(1) stored in the fourth register (406d). This adder module (403a, 403b) thus generates the elements of the four subsets (Sk)k=1..4 defined above.

The outputs of the four adders (403a1, 403a2, 403b1, 403b2) of this module respectively feed four FIFO (First-In, First-Out) type memories (405a, 405b, 405c, 405d), for example implemented as a RAM configured as four FIFOs, which therefore each implement one of the four subsets (Sk)k=1..4 defined above: the first FIFO memory (405a), of size at least equal to (nm/2) - 1, receives values U(0) + V(j') of the set S1; the second FIFO memory (405b), of size at least equal to (nm/2) - 2, receives values U(i') + V(0) of the set S2; the third FIFO memory (405c), of size at least equal to (nm/2) - 2, receives values U(1) + V(j') of the set S3; and the fourth FIFO memory (405d), of size at least equal to (nm/2) - 3, receives values U(i') + V(1) of the set S4.

The outputs of the four FIFOs (405a, 405b, 405c, 405d) feed a comparison module (404). In the embodiment illustrated in the figure, this comparison module comprises three two-by-two comparators (404a, 404b, 404c): two comparators (404a, 404b) select the smallest LLR value between the outputs of the first and second FIFOs (405a, 405b) and of the third and fourth FIFOs (405c, 405d), respectively, and the third comparator (404c) selects the smallest LLR value between the outputs of the first two. The output values of the node (400) are produced by this third comparator (404c). The elementary parity node further comprises a control unit (407), configured in particular to control the reads and writes of the input memories U and V (401, 402) and of the FIFO memories (405a, 405b, 405c, 405d), the adder module (403a, 403b) and the comparator module (404). The comparison module (404) performs a comparison and generates an element of the output message E at each calculation cycle of the control unit (407), as long as there is at least one element in each FIFO memory (405a, 405b, 405c, 405d). Thus, at each calculation cycle, the comparison module (404) selects the most reliable symbol among its four inputs and updates the output message E. The FIFO memory that contained the selected symbol is updated so as to present at the input of the comparator (404) a new pair (symbol, LLR) for the comparison of the next cycle.

In one embodiment, the four FIFO-type memories receive, during the initialization phase of the algorithm, the values of the four subsets, each corresponding to an exploration path of the matrix. In other words, the values of each subset are first calculated, and the pre-calculated values are stored in the FIFO-type memories during the initialization of the exploration algorithm. As a variant, it is possible to pre-calculate, for each selection subset, only a number of values corresponding to the size of the corresponding FIFO memory. For example, taking again the four subsets (Sk)k=1..4, which are respectively associated with the "bubbles" described above and defined by the following equations:
S1 = {U(0) + V(l); l ∈ {0, ..., nm - 1}}
S2 = {V(0) + U(l); l ∈ {1, ..., nm - 1}}
S3 = {U(1) + V(l); l ∈ {1, ..., nm - 1}}
S4 = {V(1) + U(l); l ∈ {2, ..., nm - 1}},
only the first (nm/2) - 1 values of S1 are pre-calculated and stored in the first FIFO memory (405a), of size chosen equal to (nm/2) - 1, and the first (nm/2) - 2 values of S2 are pre-calculated and stored in the second FIFO memory (405b), of size chosen equal to (nm/2) - 2;
likewise, the first (nm/2) - 2 values of S3 are pre-calculated and stored in the third FIFO memory (405c), of size chosen equal to (nm/2) - 2, and the first (nm/2) - 3 values of S4 are pre-calculated and stored in the fourth FIFO memory (405d), of size chosen equal to (nm/2) - 3. In this way, each iteration of the exploration algorithm is reduced to only two-by-two comparisons to select a value among the head output values of the four FIFOs, and the step of replacing the selected value is limited to updating the output index of the FIFO memory from which the selected value came.

This architecture proves to be more efficient than an architecture in which the elements of the subsets corresponding to the predefined bubble paths are not pre-calculated. This may be surprising insofar as more operations are performed for the pre-calculation of the subset elements than in an algorithm in which these elements are computed on the fly, as the algorithm is executed, to replace a selected element in one of the FIFO memories. The improvement in performance results from the fact that the control loop of the algorithm no longer includes a calculation operation to replace the selected element, but simply an update of the output index of the FIFO memory from which it came. The control loop of the algorithm in this embodiment is therefore simplified, which gives the architecture using FIFO-type memories an increased efficiency.

The pre-calculation of the elements stored in the FIFO memories further makes it possible to reduce the size of the memories used by the exploration algorithm. Indeed, it is possible, in one embodiment, to no longer use the memories U (401) and V (402), and to feed the FIFO memories directly during the pre-calculation phase of the elements of the bubble paths. Of course, the proposed architecture is not limited to a particular embodiment of the memories, and in particular of the FIFO memories. They may in fact be memory means configured in software in the form of FIFO-type stacks, or any other configuration of memory means constituting FIFO-type stacks.
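As an illustration of this pre-calculation phase, a minimal Python sketch is given below. It is not the patented hardware: it assumes that symbols belong to GF(2^m) in polynomial representation (so that the GF(q) sum of two symbols reduces to a bitwise XOR), that the reliability of a candidate is obtained by adding the LLR values of its two operands, and that nm is even and large enough for the indicated FIFO sizes to be positive. The function name precompute_bubbles is illustrative.

```python
def precompute_bubbles(U, V, n_m):
    """Pre-calculate the four bubble paths S1..S4 of the exploration algorithm.

    U and V are lists of n_m (symbol, LLR) pairs sorted by increasing LLR.
    The returned lists are truncated to the FIFO sizes given in the description
    (n_m/2 - 1, n_m/2 - 2, n_m/2 - 2, n_m/2 - 3); each list is already sorted
    by increasing LLR, since U and V are sorted and LLR values add up.
    """
    def gf_add(a, b):
        # GF(2^m) symbol addition (bitwise XOR) and LLR accumulation
        return (a[0] ^ b[0], a[1] + b[1])

    half = n_m // 2
    s1 = [gf_add(U[0], V[l]) for l in range(0, n_m)][: max(half - 1, 0)]
    s2 = [gf_add(V[0], U[l]) for l in range(1, n_m)][: max(half - 2, 0)]
    s3 = [gf_add(U[1], V[l]) for l in range(1, n_m)][: max(half - 2, 0)]
    s4 = [gf_add(V[1], U[l]) for l in range(2, n_m)][: max(half - 3, 0)]
    return [s1, s2, s3, s4]
```

For nm = 8, for example, the returned lists contain 3, 2, 2 and 1 elements respectively, which matches the FIFO sizes indicated above.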
FIG. 9 illustrates an algorithm (500) for implementing the proposed method, corresponding to the architecture described above, according to a particular embodiment. The nbub (nbub ∈ N, nbub > 1) FIFO-type memories are initialized (501) with the values of the subsets (Sk)k=1..nbub, which correspond to the elements of the nbub paths (of the nbub bubbles) they respectively define. A loop index m is further initialized (502), for example to the value 1: m = 1. An iteration of the algorithm is then performed, with the selection (503) of a value, denoted Ss(m), among the output values of the nbub FIFO memories, on the basis of a criterion relating to the corresponding degree of reliability. The elements stored in the FIFO memories may be pairs (symbol, reliability), in which case selecting Ss(m) amounts to selecting the most reliable pair among the output pairs of the FIFO memories. Alternatively, the elements stored in the FIFO memories may be values representing a reliability (such as the log-likelihood ratios LLR defined above), in which case the selection of Ss(m) consists in selecting the output element corresponding to the best reliability. Depending on the representation chosen for the reliability values, this selection may consist of a comparison, for example two-by-two as explained above, of the output elements of the FIFO memories, in order to select, for example, the smallest element.

The output message E is then updated (504) with the selected value: E = E ∪ {Ss(m)}. The iteration loop index m is then compared (505) with a predetermined end-of-loop value mmax, for example chosen equal to nm, in order to check whether or not the maximum number of iterations has been reached. If fewer than mmax iterations have been performed, the loop index m is incremented (508) and a new iteration is started. Otherwise, the algorithm ends (506). The incrementation of the iteration index is preceded or followed by the update of the output index of the FIFO memory from which the element Ss(m) was taken. For example, if the selected element Ss(m) came from the FIFO of index k, FIFOk, the output index of this FIFOk is incremented so that the next element of this FIFO is taken into account in the next iteration of the algorithm. A sketch of this iterative phase is given at the end of this section.

Embodiments of the proposed methods may be, at least in part, implemented on virtually any type of computer, regardless of the platform used. For example, as shown in FIG. 2a, a computer system (100) comprises a data processing unit (105), which comprises one or more processors, such as a central processing unit (CPU) or another hardware processor, an associated memory (104) (e.g., random access memory (RAM), cache memory, flash memory, etc.), a data storage device (104) (for example, a hard disk, an optical disk such as a CD or DVD, or a flash memory key), and many other elements and features typical of current computers (not shown). In general, the computer system (100) comprises at least the minimal processing, input and/or output means necessary to practice one or more embodiments of the proposed methods. For example, the processor (105) is configured to execute a computer program comprising portions of code for implementing an output message generator, configured to generate elements of the output message of the parity node implemented by the system (100) according to different embodiments of the proposed method. Although the elementary parity node, and in particular the output message generator, are described in software form, they can be implemented in hardware form or as a combination of hardware and software instructions.

Depending on the embodiment chosen, certain acts, actions, events or functions of each of the methods described herein may be performed or may occur in a different order from that in which they were described, or may be added, merged, or not be performed or not occur, as the case may be. In addition, in some embodiments, certain acts, actions or events are performed or occur concurrently and not successively.

Although described through a number of detailed exemplary embodiments, the proposed method for managing a parity node calculation unit and the calculation unit for implementing the method comprise various variants, modifications and improvements which will become apparent to those skilled in the art, it being understood that these various variants, modifications and improvements are within the scope of the invention, as defined by the following claims. In particular, although the various embodiments described above implement a number of bubbles equal to 4, the proposed management method and the corresponding calculation unit can be implemented with another number of bubbles nbub greater than or equal to two.
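Returning to the iterative phase of FIG. 9, the loop can be sketched as follows. Again this is an illustrative sketch rather than the claimed implementation: the FIFO memories are modelled as Python lists of (symbol, LLR) pairs with an explicit output (read) index, and the selection of the best reliability is done with a single min() rather than with the two-by-two comparator tree of FIG. 8. The function name ecn_fifo_iterate is illustrative.

```python
def ecn_fifo_iterate(fifos, m_max):
    """Iterative phase of FIG. 9 (sketch): fifos is a list of nbub lists of
    (symbol, LLR) pairs, each sorted by increasing LLR; m_max is the number
    of iterations (for example n_m). Returns the output message E."""
    read_idx = [0] * len(fifos)            # output (read) index of each FIFO
    E = []
    for _ in range(m_max):
        # candidates currently presented at the output of each non-empty FIFO
        heads = [(k, fifos[k][read_idx[k]])
                 for k in range(len(fifos)) if read_idx[k] < len(fifos[k])]
        if not heads:                      # every FIFO exhausted: stop early
            break
        k, best = min(heads, key=lambda h: h[1][1])   # selection: smallest LLR
        E.append(best)                     # update of the output message
        read_idx[k] += 1                   # advance the output index of FIFO k
    return E
```

With the pre-calculation sketch given earlier, a call such as ecn_fifo_iterate(precompute_bubbles(U, V, n_m), n_m) merges the pre-calculated bubble paths in order of decreasing reliability; for simplicity, this sketch does not include the duplicate-symbol check discussed with FIG. 7.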
[0018] In addition, although the various embodiments described above implement addition operations between input message elements, the invention is also applicable to calculation operations φ, where φ is a function that satisfies the following property: if a ≤ b and c ≤ d, then φ(a, c) ≤ φ(b, d). Ordinary addition, used in the embodiments above, satisfies this property, as does, for example, the maximum operator. Similarly, although the invention has been described in its application to elements of a Galois field GF(q), it is applicable to the elements of any set provided with at least one internal composition law. In addition, the various aspects and features described above may be implemented together or separately, or substituted for one another, and all of the various combinations and sub-combinations of these aspects and features are within the scope of the invention. Furthermore, some of the systems and equipment described above may not incorporate all of the modules and features described for the preferred embodiments.
Claims (17):
1. A method of managing a parity node calculation unit of an error correction code decoder capable of being represented by a bipartite graph comprising at least one parity node, the parity node being configured to receive first (U) and second (V) input messages and to produce an output message (E), the elements of the input and output messages of the parity node comprising a symbol and a degree of reliability associated with the symbol, the first and second input messages comprising lists U(i) and V(j), of length nm, of elements ordered according to their degree of reliability, the method comprising:
- initializing a plurality of nbub FIFO-type memories with elements calculated from combinations, according to a calculation operation φ(U(i), V(j)), of elements of the first and second input messages, said calculation operation φ being such that if a ≤ b and c ≤ d, then φ(a, c) ≤ φ(b, d); and
- determining values of the output message iteratively, an iteration of order m, m being an integer greater than or equal to zero, comprising:
  - selecting, from the output elements of the FIFO-type memories, an element Ss(m) having the best degree of reliability;
  - generating an element of the output message (E) from the selected element Ss(m);
  - updating the output index of the FIFO memory from which the selected element Ss(m) is taken.

2. The method of claim 1, wherein nbub is equal to four, and wherein the four FIFO-type memories are initialized with elements of four subsets S1, S2, S3 and S4, generated by combining the degrees of reliability respectively associated with the elements U(i) and V(j) of the input messages, as follows:
S1 = {φ(U(0), V(l)); l ∈ {0, ..., nm - 1}}
S2 = {φ(V(0), U(l)); l ∈ {1, ..., nm - 1}}
S3 = {φ(U(1), V(l)); l ∈ {1, ..., nm - 1}}
S4 = {φ(V(1), U(l)); l ∈ {2, ..., nm - 1}}.

3. The method according to claim 2, wherein the four FIFO-type memories are dimensioned with sizes respectively chosen equal to: nm/2 - 1, nm/2 - 2, nm/2 - 2, and nm/2 - 3.

4. The method according to any one of the preceding claims, wherein generating an element of the output message (E) from the selected element Ss(m) is performed only if the output message E does not already contain said element.

5. The method according to any one of the preceding claims, wherein the degree of reliability of each element is represented by a log-likelihood ratio.

6. The method according to any one of the preceding claims, wherein the calculation operation φ is an addition.

7. The method according to any one of the preceding claims, wherein the error correction code decoder is a non-binary LDPC code decoder.

8. A computer program, loadable into a memory associated with a processor, and comprising portions of code for implementing the steps of a method according to any one of claims 1 to 7 when said program is executed by the processor.

9. The computer program of claim 8, wherein code portions of said program are obtained by decompressing or decoding a set of data stored in a memory.

10. A non-transitory storage medium for a computer-executable program, comprising a data set representing one or more programs, said one or more programs including instructions which, when said one or more programs are executed by a computer comprising a processing unit operatively coupled to memory means and to an input/output interface module, cause the computer to manage a parity node calculation unit according to the method of any one of claims 1 to 7.
11. A parity node calculation unit of an error correction code decoder capable of being represented by a bipartite graph, the parity node being configured to receive first (U) and second (V) input messages and to produce an output message (E), the elements of the input and output messages of the parity node comprising a symbol and a degree of reliability associated with the symbol, the first and second input messages comprising lists U(i) and V(j), of length nm, of elements ordered according to their degree of reliability, the calculation unit comprising:
- a computer processor operatively coupled to memory means, the memory means being configured as a plurality of nbub FIFO-type memories;
- an output message generator, executing on the computer processor and configured to:
  - initialize the plurality of nbub FIFO-type memories with elements calculated from combinations, according to a calculation operation φ(U(i), V(j)), of elements of the first and second input messages, said calculation operation φ being such that if a ≤ b and c ≤ d, then φ(a, c) ≤ φ(b, d);
  - determine values of the output message iteratively, an iteration of order m, m being an integer greater than or equal to zero, comprising:
    - selecting, from the output elements of the FIFO-type memories, an element Ss(m) having the best degree of reliability;
    - generating an element of the output message (E) from the selected element Ss(m);
    - updating the output index of the FIFO memory from which the selected element Ss(m) is taken.

12. The calculation unit according to claim 11, wherein nbub is equal to four, and wherein the four FIFO-type memories are initialized with elements of four subsets S1, S2, S3 and S4, generated by combining the degrees of reliability respectively associated with the elements U(i) and V(j) of the input messages, as follows:
S1 = {φ(U(0), V(l)); l ∈ {0, ..., nm - 1}}
S2 = {φ(V(0), U(l)); l ∈ {1, ..., nm - 1}}
S3 = {φ(U(1), V(l)); l ∈ {1, ..., nm - 1}}
S4 = {φ(V(1), U(l)); l ∈ {2, ..., nm - 1}}.

13. The calculation unit according to claim 12, wherein the four FIFO-type memories are dimensioned with sizes respectively chosen equal to: nm/2 - 1, nm/2 - 2, nm/2 - 2, and nm/2 - 3.

14. The calculation unit according to any one of claims 12 and 13, wherein generating an element of the output message (E) from the selected element Ss(m) is performed only if the output message E does not already contain said element.

15. The calculation unit according to any one of claims 12 to 14, wherein the degree of reliability of each element is represented by a log-likelihood ratio.

16. The calculation unit according to any one of claims 12 to 15, wherein the calculation operation φ is an addition.

17. The calculation unit according to any one of claims 12 to 16, wherein the error correction code decoder is a non-binary LDPC code decoder.
Patent family:
Publication number | Publication date
US10361723B2 | 2019-07-23
KR102274328B1 | 2021-07-07
EP3092717B1 | 2020-02-26
WO2015104275A1 | 2015-07-16
US20160336967A1 | 2016-11-17
HK1232347A1 | 2018-01-05
CN106464268B | 2020-03-10
CN106464268A | 2017-02-22
FR3016259B1 | 2017-09-08
KR20170020305A | 2017-02-22
EP3092717A1 | 2016-11-16
Legal status:
2015-01-08 | PLFP | Fee payment | Year of fee payment: 2
2015-12-23 | PLFP | Fee payment | Year of fee payment: 3
2017-05-11 | PLFP | Fee payment | Year of fee payment: 4
2017-12-26 | PLFP | Fee payment | Year of fee payment: 5
2018-12-27 | PLFP | Fee payment | Year of fee payment: 6
2019-12-30 | PLFP | Fee payment | Year of fee payment: 7
2020-12-22 | PLFP | Fee payment | Year of fee payment: 8
Priority:
Priority application: FR1450086A, filed 2014-01-07, METHOD FOR MANAGING A PARITY NODE CALCULATION UNIT, EQUIPMENT AND SOFTWARE FOR IMPLEMENTING THE METHOD

Applications claiming this priority (Application number | Publication number | Filing date | Title):
FR1450086A | FR3016259B1 | 2014-01-07 | METHOD FOR MANAGING A PARITY NODE CALCULATION UNIT, EQUIPMENT AND SOFTWARE FOR IMPLEMENTING THE METHOD
KR1020167021517A | KR102274328B1 | 2015-01-07 | Method for managing a check node calculating unit, equipment and software for implementing the method
CN201580005069.6A | CN106464268B | 2015-01-07 | Method for managing check node computing devices, and device and software for implementing the method
PCT/EP2015/050135 | WO2015104275A1 | 2015-01-07 | Decoding of non-binary LDPC codes
US15/110,349 | US10361723B2 | 2015-01-07 | Decoding of non-binary LDPC codes
EP15700430.0A | EP3092717B1 | 2015-01-07 | Decoding of non-binary LDPC codes